Amata: An Annealing Mechanism for Adversarial Training Acceleration


Abstract

Despite their empirical success in various domains, it has been revealed that deep neural networks are vulnerable to maliciously perturbed input data that can severely degrade their performance. These are known as adversarial attacks. To counter adversarial attacks, adversarial training, formulated as a form of robust optimization, has been demonstrated to be effective. However, conducting adversarial training brings much computational overhead compared with standard training. In order to reduce the computational cost, we propose an annealing mechanism, Amata, to reduce the overhead associated with adversarial training. The proposed Amata is provably convergent, well-motivated from the lens of optimal control theory, and can be combined with existing acceleration methods to further enhance performance. It is demonstrated that, on standard datasets, Amata can achieve similar or better robustness with around 1/3 to 1/2 of the computational time of traditional methods. In addition, Amata can be incorporated into other adversarial training acceleration algorithms (e.g., YOPO, Free, Fast, and ATTA), which leads to a further reduction in computational time on large-scale problems.
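To make the annealing idea concrete, the sketch below shows PGD-based adversarial training in PyTorch whose inner attack is annealed from a few cheap steps early in training to more, accurate steps later. The linear schedule, the 8/255 budget, and the names `model`, `loader`, `optimizer` are illustrative assumptions, not the exact Amata schedule.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon, alpha, num_steps):
    """Craft an L-inf bounded adversarial example with projected gradient descent."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(num_steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend the loss
            delta.clamp_(-epsilon, epsilon)      # project onto the L-inf ball
        delta.grad.zero_()
    return (x + delta).detach()

def annealed_adversarial_training(model, loader, optimizer, epochs,
                                  epsilon=8 / 255, min_steps=1, max_steps=10):
    for epoch in range(epochs):
        # Anneal attack strength: cheap inner loops early, accurate ones late.
        num_steps = min_steps + (max_steps - min_steps) * epoch // max(epochs - 1, 1)
        alpha = 2.5 * epsilon / num_steps        # common PGD step-size heuristic
        for x, y in loader:
            x_adv = pgd_attack(model, x, y, epsilon, alpha, num_steps)
            optimizer.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            optimizer.step()
```

The saving comes from the early epochs: a 1-step attack costs roughly one extra forward-backward pass per batch while a 10-step attack costs ten, so front-loading the cheap attacks cuts total time without giving up the strong attacks that late-stage robustness needs.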


Similar Resources

Adversarial Training for Relation Extraction

Adversarial training is a means of regularizing classification algorithms by generating adversarial noise to the training data. We apply adversarial training in relation extraction within the multi-instance multi-label learning framework. We evaluate various neural network architectures on two different datasets. Experimental results demonstrate that adversarial training is generally effective f...
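As a concrete illustration of this kind of regularization, here is a minimal sketch of fast-gradient adversarial noise applied to word embeddings, a common recipe in NLP adversarial training; the single-label `encoder` and the budget `eps` are simplifying assumptions rather than the paper's multi-instance multi-label setup.

```python
import torch
import torch.nn.functional as F

def adversarial_regularized_loss(encoder, embeddings, y, eps=0.05):
    """Clean loss plus the loss on embeddings nudged along the loss gradient."""
    embeddings = embeddings.detach().requires_grad_(True)
    clean_loss = F.cross_entropy(encoder(embeddings), y)
    grad, = torch.autograd.grad(clean_loss, embeddings, retain_graph=True)
    # L2-normalized worst-case direction: the "adversarial noise".
    perturb = eps * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    adv_loss = F.cross_entropy(encoder(embeddings + perturb), y)
    return clean_loss + adv_loss  # minimizing both regularizes the classifier
```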


Adversarial Training for Sketch Retrieval

Generative Adversarial Networks (GAN) can learn excellent representations for unlabelled data which have been applied to image generation and scene classification. To the best of our knowledge, these representations have not yet been applied to visual search. In this paper, we show that representations learned by GANs can be applied to visual search within heritage documents that contain Merchant ...
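A hedged sketch of the retrieval step this enables: treat an intermediate layer of a trained GAN discriminator as a feature extractor and rank a corpus by cosine similarity. The `feature_extractor` below is an assumed stand-in for such a trained network, not an API from the paper.

```python
import torch
import torch.nn.functional as F

def retrieve(feature_extractor, query_img, corpus_imgs, top_k=5):
    """Rank corpus images against a query using learned GAN features."""
    with torch.no_grad():
        q = feature_extractor(query_img.unsqueeze(0)).flatten(1)   # (1, D)
        db = feature_extractor(corpus_imgs).flatten(1)             # (N, D)
    sims = F.cosine_similarity(q, db)   # one similarity score per corpus image
    return sims.topk(top_k).indices     # indices of the closest matches
```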


Adversarial Training for Unsupervised Bilingual Lexicon Induction

Word embeddings are well known to capture linguistic regularities of the language on which they are trained. Researchers also observe that these regularities can transfer across languages. However, previous endeavors to connect separate monolingual word embeddings typically require cross-lingual signals as supervision, either in the form of parallel corpus or seed lexicon. In this work, we show...
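The adversarial way of connecting two monolingual spaces without supervision can be sketched as a small GAN game: a linear map W pushes source embeddings into the target space while a discriminator D tries to tell mapped vectors from real target vectors. The dimensions, optimizers, and learning rates below are illustrative assumptions.

```python
import torch
import torch.nn as nn

dim = 300
W = nn.Linear(dim, dim, bias=False)  # maps source embeddings into target space
D = nn.Sequential(nn.Linear(dim, 512), nn.LeakyReLU(0.2), nn.Linear(512, 1))

opt_w = torch.optim.SGD(W.parameters(), lr=0.1)
opt_d = torch.optim.SGD(D.parameters(), lr=0.1)
bce = nn.BCEWithLogitsLoss()

def train_step(src_batch, tgt_batch):
    # Discriminator: separate mapped source vectors (0) from target vectors (1).
    mapped = W(src_batch).detach()
    d_loss = (bce(D(mapped), torch.zeros(len(mapped), 1))
              + bce(D(tgt_batch), torch.ones(len(tgt_batch), 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Mapping: update W so that mapped source vectors fool the discriminator.
    g_loss = bce(D(W(src_batch)), torch.ones(len(src_batch), 1))
    opt_w.zero_grad()
    g_loss.backward()
    opt_w.step()
```

At convergence the two distributions are hard to distinguish, and nearest neighbors across the mapped spaces yield a bilingual lexicon without any parallel data.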


A-NICE-MC: Adversarial Training for MCMC

Existing Markov Chain Monte Carlo (MCMC) methods are either based on general-purpose and domain-agnostic schemes, which can lead to slow convergence, or problem-specific proposals hand-crafted by an expert. In this paper, we propose A-NICE-MC, a novel method to automatically design efficient Markov chain kernels tailored for a specific domain. First, we propose an efficient likelihood-free advers...
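For context, the object being learned is an MCMC transition kernel. The sketch below shows only the Metropolis-Hastings scaffolding such a kernel sits in, with a plain symmetric Gaussian proposal standing in for the adversarially trained flow-based proposal; `log_prob` and `proposal_scale` are assumed names.

```python
import torch

def mh_step(log_prob, x, proposal_scale=0.5):
    """One Metropolis-Hastings step for a batch of chains, x: (chains, dim)."""
    x_new = x + proposal_scale * torch.randn_like(x)
    # Symmetric proposal, so the acceptance ratio is just the density ratio.
    log_accept = log_prob(x_new) - log_prob(x)
    accept = torch.rand_like(log_accept).log() < log_accept
    return torch.where(accept.unsqueeze(-1), x_new, x)
```

Replacing the fixed Gaussian with a learnable proposal, trained so its samples are hard to distinguish from target samples, is what turns this generic kernel into a domain-tailored one.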


Adversarial Training for Probabilistic Spiking Neural Networks

Classifiers trained using conventional empirical risk minimization or maximum likelihood methods are known to suffer dramatic performance degradations when tested over examples adversarially selected based on knowledge of the classifier’s decision rule. Due to the prominence of Artificial Neural Networks (ANNs) as classifiers, their sensitivity to adversarial examples, as well as robust trainin...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i12.17278